Continual Learning through Retrieval and Imagination

Authors

Abstract

Continual learning is an intellectual ability of artificial agents to learn new streaming labels from sequential data. The main impediment to continual learning is catastrophic forgetting, a severe performance degradation on previously learned tasks. Although simply replaying all previous data or continuously adding model parameters could alleviate the issue, it is impractical in real-world applications due to limited available resources. Inspired by the mechanism by which the human brain deepens its past impressions, we propose a novel framework, Deep Retrieval and Imagination (DRI), which consists of two components: 1) an embedding network that constructs a unified embedding space across the sequentially arriving tasks; 2) a generative model that produces additional (imaginary) data based on the learned memory. By retrieving past experiences and the corresponding imaginary data, DRI distills knowledge and rebalances the embedding space to further mitigate forgetting. Theoretical analysis demonstrates that DRI can reduce the loss approximation error and improve robustness through retrieval and imagination, bringing better generalizability to the network. Extensive experiments show that DRI performs significantly better than existing state-of-the-art methods and effectively alleviates catastrophic forgetting.
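To make the two-component design concrete, the following is a minimal PyTorch-style sketch of one rehearsal step that combines new data, retrieved memory, and generated (imaginary) samples, with an embedding-distillation term against the previous network. This is an illustration of the idea in the abstract, not the authors' implementation: the EmbeddingNet/Generator interfaces, the conditional-generation call, and the 0.5 distillation weight are all assumptions.

```python
# Hedged sketch of retrieval + imagination rehearsal; NOT the paper's code.
import torch
import torch.nn.functional as F

def rehearsal_step(embed_net, old_embed_net, generator, classifier,
                   new_x, new_y, memory_x, memory_y, opt, n_imaginary=64):
    """One training step on new data, retrieved memory, and imaginary data."""
    # "Imagine" extra samples for past classes via the generative model
    # (conditional generator API is an assumption).
    past_classes = memory_y.unique()
    fake_y = past_classes[torch.randint(len(past_classes), (n_imaginary,))]
    with torch.no_grad():
        fake_x = generator(fake_y)

    x = torch.cat([new_x, memory_x, fake_x])
    y = torch.cat([new_y, memory_y, fake_y])

    z = embed_net(x)
    loss = F.cross_entropy(classifier(z), y)  # rebalanced classification loss

    # Distill knowledge from the previous embedding network so the unified
    # embedding space does not drift on old classes (forgetting mitigation).
    with torch.no_grad():
        z_old = old_embed_net(x)
    loss = loss + 0.5 * F.mse_loss(z, z_old)  # weight 0.5 is a placeholder

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The key design point the sketch captures is that replay is rebalanced: old classes are represented both by a small retrieved memory and by generated samples, so the classification loss is not dominated by the new task.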


Similar Resources

Continual Learning Through Synaptic Intelligence

While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks...
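The regularization idea behind Synaptic Intelligence can be sketched compactly: during training, each parameter accumulates a path integral of how much it contributed to reducing the loss, and a quadratic penalty then anchors the important parameters when the task changes. The class below is a hedged sketch assuming a standard PyTorch training loop; the names, the strength c, and the damping constant xi follow the paper's notation, but the values here are placeholders, not the reference implementation.

```python
# Hedged sketch of per-synapse importance (Synaptic Intelligence style).
import torch

class SIRegularizer:
    def __init__(self, params, c=0.1, xi=1e-3):
        self.params = list(params)
        self.c, self.xi = c, xi
        self.w = [torch.zeros_like(p) for p in self.params]      # path integrals
        self.omega = [torch.zeros_like(p) for p in self.params]  # importances
        self.theta_star = [p.detach().clone() for p in self.params]

    def accumulate(self, prev_params):
        # Call after each optimizer step: credit each weight with the loss
        # change it produced, -g_k * (theta_k_new - theta_k_old).
        for w, p, prev in zip(self.w, self.params, prev_params):
            if p.grad is not None:
                w -= p.grad.detach() * (p.detach() - prev)

    def end_task(self):
        # Convert path integrals into importances and snapshot the weights.
        for w, om, p, star in zip(self.w, self.omega,
                                  self.params, self.theta_star):
            om += w / ((p.detach() - star).pow(2) + self.xi)
            star.copy_(p.detach())
            w.zero_()

    def penalty(self):
        # Quadratic surrogate loss anchoring important weights near the
        # values they had at the end of earlier tasks.
        return self.c * sum(((p - star).pow(2) * om).sum()
                            for p, om, star in zip(self.params, self.omega,
                                                   self.theta_star))

# Typical loop per step:
#   prev = [p.detach().clone() for p in si.params]
#   (task_loss + si.penalty()).backward(); opt.step(); si.accumulate(prev)
# and si.end_task() at each task boundary.
```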


Supplementary Material: Continual Learning Through Synaptic Intelligence

As an additional experiment, we trained a CNN (4 convolutional, followed by 2 dense layers with dropout; cf. main text) on the split CIFAR-10 benchmark. We used the same multi-head setup as in the case of split MNIST, using Adam (η = 1×10⁻³, β₁ = 0.9, β₂ = 0.999, minibatch size 256). First, we trained the network for 60 epochs on the first 5 categories (Task A). At this point the training accuracy...
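The excerpt fixes the optimizer and the coarse architecture (4 convolutional layers, then 2 dense layers with dropout, one output head per task) but not the layer widths, so the sketch below fills those in with assumed values purely to show what the multi-head split-CIFAR-10 setup looks like.

```python
# Sketch of the described setup; filter counts and kernel sizes are assumed.
import torch
import torch.nn as nn

class SplitCIFARNet(nn.Module):
    def __init__(self, n_tasks=2, classes_per_task=5):
        super().__init__()
        self.features = nn.Sequential(                 # 4 conv layers
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.trunk = nn.Sequential(                    # 2 dense layers w/ dropout
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(), nn.Dropout(0.5),
        )
        # Multi-head: one 5-way output layer per task, as in split MNIST.
        self.heads = nn.ModuleList(
            nn.Linear(512, classes_per_task) for _ in range(n_tasks))

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(self.features(x)))

model = SplitCIFARNet()
# Hyperparameters quoted from the supplementary text.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
```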


Continual Learning through Evolvable Neural Turing Machines

Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM) approach is able to perform one-shot learning in a reinforcement learning task without catastrophic forgetting...


Intentional and Incidental Vocabulary Learning through Listening Comprehension

One of the important issues in language learning is vocabulary acquisition. The aim of this study is to examine intentional (direct) and incidental (indirect) vocabulary acquisition in the process of listening, and to determine which of the two is more effective in improving and facilitating vocabulary learning for intermediate-level language learners. The study first investigates the short-term difference between the effects of intentional and incidental vocabulary acquisition through listening, and then the difference between intentional and incidental learning...


Continual Coevolution Through Complexification

In competitive coevolution, the goal is to establish an “arms race” that will lead to increasingly sophisticated strategies. However, in practice, the process often leads to idiosyncrasies rather than continual improvement. Applying the NEAT method for evolving neural networks to a competitive simulated robot duel domain, we will demonstrate that (1) as evolution progresses the networks become ...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i8.20837